Mild solutions to the dynamic programming equation for stochastic optimal control problems

Authors

Abstract


Related articles

The Dynamic Programming Equation for Second Order Stochastic Target Problems

Motivated by applications in mathematical finance [3] and stochastic analysis [16], we continue our study of second order backward stochastic differential equations (2BSDEs). In this paper, we derive the dynamic programming equation for a certain class of problems which we call second order stochastic target problems. In contrast with previous formulations of similar problems, we restrict control proc...
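For orientation, the sketch below states a generic stochastic target problem in the standard notation of that literature (state process X, controlled process Y, target function g, admissible controls ν); it is not the paper's exact second order formulation, in which the dynamic programming equation becomes fully nonlinear in the Hessian of the value function.

    % Generic (first order) stochastic target problem: the value is the smallest
    % initial level y of the controlled process Y from which the terminal target
    % g(X_T) can be dominated almost surely.
    \[
      v(t,x) \;=\; \inf\Bigl\{\, y \in \mathbb{R} \;:\;
        Y_T^{t,x,y,\nu} \,\ge\, g\bigl(X_T^{t,x}\bigr)
        \ \text{a.s. for some admissible } \nu \Bigr\}.
    \]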


Dynamic consistency for stochastic optimal control problems

For a sequence of dynamic optimization problems, we discuss a notion of consistency over time. This notion can be informally introduced as follows. At the very first time step t0, the decision maker formulates an optimization problem that yields optimal decision rules for all the forthcoming time steps t0, t1, ..., T; at the next time step t1, he is able to formulate a new optimiza...
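As a reference point, time consistency is usually discussed against the backward dynamic programming recursion shown below; the symbols (value functions V, stage cost L, terminal cost K, dynamics f, noise ξ) are generic and introduced here, not taken from the paper.

    % Discrete-time stochastic dynamic programming recursion. A family of
    % problems is consistent over time when the decision rules computed at t_0
    % from this recursion remain optimal for the subproblems re-stated at the
    % later times t_1, ..., T.
    \[
      V_{t_k}(x) \;=\; \min_{u \in \mathcal{U}}\;
        \mathbb{E}\Bigl[\, L_{t_k}(x, u, \xi_{t_k})
          \;+\; V_{t_{k+1}}\bigl(f_{t_k}(x, u, \xi_{t_k})\bigr) \Bigr],
      \qquad V_{T}(x) \;=\; K(x).
    \]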


Stochastic Dynamic Programming with Markov Chains for Optimal Sustainable Control of the Forest Sector with Continuous Cover Forestry

We present a stochastic dynamic programming approach with Markov chains for optimal control of the forest sector. The forest is managed via continuous cover forestry, and the complete system is sustainable. Forest industry production, logistics solutions and harvest levels are optimized based on the sequentially revealed states of the markets. Adaptive full system optimization is necessary for co...
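To make the general approach concrete, here is a minimal, self-contained sketch of stochastic dynamic programming over a Markov chain of market states. It is a toy illustration only: the states, harvest intensities, prices, transition matrix and cost are invented for this example, and the forest stock dynamics of the actual model are omitted.

    # Toy sketch: finite-horizon stochastic dynamic programming over a Markov
    # chain of market states. All numbers are illustrative, not the paper's
    # forest-sector model; forest stock dynamics are omitted for brevity.
    import numpy as np

    n_states = 3                 # e.g. low / medium / high timber price
    actions = [0.0, 0.5, 1.0]    # hypothetical harvest intensities
    T = 10                       # planning horizon (periods)

    # Exogenous market Markov chain (rows sum to 1); illustrative numbers only.
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])
    price = np.array([1.0, 2.0, 3.0])   # hypothetical price in each state

    def reward(state, a):
        """Immediate profit: revenue minus a convex harvesting cost (made up)."""
        return price[state] * a - 0.8 * a ** 2

    # Backward induction: V[t, s] = max_a { reward(s, a) + E[V[t+1, s'] | s] }.
    V = np.zeros((T + 1, n_states))
    policy = np.zeros((T, n_states), dtype=int)
    for t in range(T - 1, -1, -1):
        for s in range(n_states):
            q = [reward(s, a) + P[s] @ V[t + 1] for a in actions]
            policy[t, s] = int(np.argmax(q))
            V[t, s] = max(q)

    print("Harvest intensity at t=0 per market state:",
          [actions[policy[0, s]] for s in range(n_states)])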


A Method for Solving Optimal Control Problems Using Genetic Programming

This paper deals with a novel method for solving optimal control problems based on genetic programming. The approach produces a set of trial solutions and seeks the best of them. If the solution cannot be expressed in a closed analytical form, then our method produces an approximation with a controlled level of accuracy. Using numerical examples, we demonstrate how to use the results.
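The abstract does not spell out the algorithm, so the sketch below is only a much-simplified, mutation-only stand-in for full genetic programming (no crossover): candidate control laws u(t, x) are random expression trees, scored by simulating a toy problem (minimize the discretized integral of x^2 + u^2 for dx/dt = u, x(0) = 1 on [0, 1]), with the best trees kept and mutated. The problem, operator set and parameters are all invented for illustration.

    # Mutation-only, genetic-programming-style search for a feedback law u(t, x).
    # Toy stand-in for illustration; not the paper's algorithm.
    import random

    OPS = ['+', '-', '*']          # internal nodes
    TERMS = ['t', 'x', 'const']    # leaves

    def random_tree(depth):
        """Build a random expression tree as nested tuples."""
        if depth <= 0 or random.random() < 0.3:
            kind = random.choice(TERMS)
            return round(random.uniform(-2, 2), 3) if kind == 'const' else kind
        return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, t, x):
        """Evaluate an expression tree at (t, x)."""
        if isinstance(tree, float):
            return tree
        if tree == 't':
            return t
        if tree == 'x':
            return x
        op, a, b = tree
        va, vb = evaluate(a, t, x), evaluate(b, t, x)
        return va + vb if op == '+' else va - vb if op == '-' else va * vb

    def cost(tree, steps=50):
        """Euler simulation of dx/dt = u(t, x); accumulate (x^2 + u^2) dt."""
        x, dt, total = 1.0, 1.0 / steps, 0.0
        for k in range(steps):
            u = max(-10.0, min(10.0, evaluate(tree, k * dt, x)))  # clip for stability
            total += (x * x + u * u) * dt
            x += u * dt
        return total

    def mutate(tree, depth=3):
        """Replace the whole tree or one random branch with a fresh subtree."""
        if not isinstance(tree, tuple) or random.random() < 0.3:
            return random_tree(depth)
        op, a, b = tree
        if random.random() < 0.5:
            return (op, mutate(a, depth - 1), b)
        return (op, a, mutate(b, depth - 1))

    random.seed(0)
    pop = [random_tree(3) for _ in range(60)]
    for gen in range(30):
        pop.sort(key=cost)
        survivors = pop[:20]                      # keep the best third
        pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    best = min(pop, key=cost)
    print('best cost:', round(cost(best), 4))
    print('best control law (expression tree):', best)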


Local Solutions to the Hamilton-Jacobi-Bellman Equation in Stochastic Problems of Optimal Control

We suggest a method for solving control problems for linear stochastic systems with functionals quadratic in the phase variable and constraints on the absolute values of the control actions. Many problems of optimal control of mechanical systems under random perturbations can be written in the form (1). Here, the w_i(s) are independent Wiener processes and the ν_i are independent Poisson...
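The paper's equation (1) is not reproduced on this page. As a point of reference only, the standard Hamilton-Jacobi-Bellman equation for a controlled diffusion with drift f, diffusion matrix σ, running cost ℓ, terminal cost Φ and control bound |u| ≤ ū (the Poisson jump terms mentioned in the abstract are omitted) reads:

    % Standard HJB equation for a controlled diffusion; the constraint
    % |u| <= \bar{u} reflects the bounds on the absolute values of the control
    % actions. Jump (Poisson) terms from the abstract's setting are omitted.
    \[
      -\,\partial_t V(t,x) \;=\;
        \min_{|u| \le \bar u}\Bigl[\, \ell(t,x,u)
          \;+\; \nabla_x V(t,x)^{\top} f(t,x,u)
          \;+\; \tfrac12\,\operatorname{tr}\bigl(\sigma(t,x)\sigma(t,x)^{\top}\,\nabla_x^2 V(t,x)\bigr) \Bigr],
      \qquad V(T,x) \;=\; \Phi(x).
    \]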



Journal

Journal title: Automatica

Year: 2018

ISSN: 0005-1098

DOI: 10.1016/j.automatica.2018.02.008